
    Resilience for Asynchronous Iterative Methods for Sparse Linear Systems

    Large-scale simulations are used across a variety of application areas in science and engineering to help drive innovation forward. Many of these simulations spend the vast majority of their computational time solving large systems of linear equations, which typically arise from discretizations of the partial differential equations used to mathematically model various phenomena. The algorithms used to solve these systems are typically iterative in nature, and making efficient use of computational time on High Performance Computing (HPC) clusters requires continual improvement of these iterative algorithms. Future HPC platforms are expected to encounter three main problem areas: scalability of code, reliability of hardware, and energy efficiency of the platform. The HPC resources expected to run these large programs are planned to consist of billions of processing units, drawn from traditional multicore processors as well as a variety of hardware accelerators, and this growth in parallelism gives rise to all three problems. Previous work on algorithm development has focused primarily on creating fault tolerance mechanisms for traditional iterative solvers. Recent work has begun to revisit asynchronous methods for solving large-scale applications, and this dissertation presents research into fault tolerance for fine-grained methods that are asynchronous in nature. Classical convergence results for asynchronous methods are revisited and modified to account for the possible occurrence of a fault, and a variety of techniques for recovering from the effects of a fault are proposed. Examples of how these techniques can be used are shown for various algorithms, including an analysis of a fine-grained algorithm for computing incomplete factorizations. Lastly, numerous modeling and simulation tools for the further construction of iterative algorithms for HPC applications are developed, including numerical models for simulating faults and a simulation framework that can be used to extrapolate the performance of algorithms toward future HPC systems.
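
    As a loose illustration of the kind of fault injection and recovery the abstract describes (and not the dissertation's actual algorithms or fault model), the sketch below emulates an asynchronous Jacobi-style solve sequentially in Python, corrupts one solution component mid-run, and recovers by resetting out-of-range entries. The function name, the fault magnitude, the 1e8 detection threshold, and the reset-to-zero recovery are illustrative assumptions.

# Toy emulation of an asynchronous Jacobi-style iteration with one injected
# transient fault and a simple "reset the corrupted entries" recovery.
# Illustrative sketch only; not the dissertation's algorithms or fault model.
import numpy as np

def async_jacobi_with_fault(A, b, sweeps=200, fault_sweep=50, fault_idx=0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    D = A.diagonal()
    for sweep in range(sweeps):
        if sweep == fault_sweep:
            x[fault_idx] = 1e12  # injected transient fault (e.g. silent data corruption)
        # Relax components one at a time in a random order, always reading the most
        # recently written entries of x (a sequential stand-in for asynchronous updates).
        for i in rng.permutation(n):
            x[i] = (b[i] - A[i, :] @ x + D[i] * x[i]) / D[i]
        # Crude detection and recovery: reset any non-finite or wildly out-of-range entries.
        bad = ~np.isfinite(x) | (np.abs(x) > 1e8)
        if bad.any():
            x[bad] = 0.0
    return x, np.linalg.norm(b - A @ x)

# 1D finite-difference Laplacian test problem.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, res = async_jacobi_with_fault(A, b)
print("final residual norm:", res)

    A real implementation would run the component updates in concurrent threads over sparse storage and would use a more principled detection and recovery strategy; the reset-to-zero rollback here is only the simplest stand-in.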

    Conformal Boundary Conditions from Cutoff AdS3

    We construct a particular flow in the space of 2D Euclidean QFTs on a torus, which we argue is dual to a class of solutions in 3D Euclidean gravity with conformal boundary conditions. This new flow comes from a Legendre transform of the kernel that implements the TT¯ deformation, and is motivated by the need for boundary conditions in Euclidean gravity to be elliptic, i.e., to have well-defined propagators for metric fluctuations. We demonstrate equivalence between our flow equation and variants of the Wheeler-DeWitt equation for a torus universe in the so-called Constant Mean Curvature (CMC) slicing. We derive a kernel for the flow, and we compute the corresponding ground-state energy in the low-temperature limit. Once the deformation parameters are fixed, the existence of the ground state is independent of the initial data, provided the seed theory is a CFT. The high-temperature density of states shows Cardy-like behavior, rather than the Hagedorn growth characteristic of TT¯-deformed theories.
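
    For background only (standard TT¯ results, not claims of this paper, quoted in one common convention; signs and the normalization of the deformation parameter λ differ between references): on a spatial circle of circumference R, the deformed finite-size energies obey an inviscid-Burgers flow, and for a CFT seed of central charge c the deformed ground-state energy has a closed form,

        \partial_\lambda E_n = E_n\,\partial_R E_n + \frac{P_n^2}{R},
        \qquad
        E_0(R,\lambda) = \frac{R}{2\lambda}\left(\sqrt{1 - \frac{2\pi c\,\lambda}{3R^2}} - 1\right).

    In this convention, one sign of λ produces a minimum radius and Hagedorn-like high-energy growth, which is the behavior the abstract contrasts with the Cardy-like density of states of the Legendre-transformed flow.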

    E2: Equity and Excellence Framework - A Pathway to Advancing Educational Equity and Excellence

    In response to a national and global equity-focused call to action, the Illinois Mathematics and Science Academy engaged in a process to institutionalize and operationalize Equity and Excellence in order to address educational inequities. This process included creating an educational case for engaging in Equity and Excellence, policy development, capacity building for equity work, an inclusive and comprehensive data collection methodology, data meaning-making, and the development of an equity and excellence plan and scorecard. This workshop will provide participants with an understanding of educational equity, share tools to assist educational institutions in drafting data-informed equity and excellence policies and plans, and provide a framework to score and measure progress in advancing equity.

    Implementing Asynchronous Linear Solvers Using Non-Uniform Distributions

    Asynchronous iterative methods present a mechanism for improving the performance of algorithms on highly parallel computational platforms by removing the overhead associated with synchronization among computing elements. This paper considers a class of asynchronous iterative linear system solvers that employ randomization to determine the component update order, focusing specifically on the effects of drawing that order from non-uniform distributions. Results from shared-memory experiments with a two-dimensional finite-difference discrete Laplacian problem show that distributions favoring the selection of components with a larger contribution to the residual may lead to faster convergence than selecting uniformly. Multiple implementations of the randomized asynchronous linear system solvers are considered and tested with various distributions and parameters. For the best parameter choices, average times with the normal and exponential distributions were, respectively, 13.3% and 17.3% faster than with a uniform distribution, and the solvers converged in approximately 10% fewer iterations than traditional stationary solvers.
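
    A minimal sequential stand-in for the comparison the abstract describes is sketched below; it is illustrative only, not the paper's shared-memory implementation. The paper draws update orders from normal and exponential distributions, whereas this sketch uses a simpler residual-proportional weighting as the non-uniform choice, to make the contrast with uniform selection concrete. The function name, the update count, and the 16-by-16 grid size are illustrative assumptions; the five-point Laplacian test problem mirrors the abstract's setting.

# Randomized relaxation of one component per step, comparing uniform selection
# with a non-uniform (residual-proportional) selection of the component to update.
# Illustrative sketch only; not the paper's shared-memory implementation.
import numpy as np

def randomized_relaxation(A, b, n_updates=20000, weighted=False, seed=0):
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    D = A.diagonal()
    for _ in range(n_updates):
        r = b - A @ x                    # full residual (recomputed here for clarity)
        if weighted:
            p = np.abs(r) + 1e-15        # favor components with larger residual contribution
            i = rng.choice(n, p=p / p.sum())
        else:
            i = rng.integers(n)          # uniform selection
        x[i] += r[i] / D[i]              # relax the chosen component
    return np.linalg.norm(b - A @ x)

# 2D five-point finite-difference Laplacian on an m-by-m grid.
m = 16
T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.kron(np.eye(m), T) + np.kron(T, np.eye(m))
b = np.ones(m * m)
print("uniform selection, final residual :", randomized_relaxation(A, b, weighted=False))
print("weighted selection, final residual:", randomized_relaxation(A, b, weighted=True))

    Recomputing the full residual before every update is done only for clarity; an actual asynchronous implementation would update cached residual entries incrementally and in parallel.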

    TT¯-deformed actions and (1,1) supersymmetry

    We describe an algorithmic method for calculating the TT¯-deformed Lagrangian of a given seed theory by solving an algebraic system of equations. This method is derived from the topological gravity formulation of the deformation, and the algorithm is far simpler than directly solving the partial differential equations required in most earlier proposals. We present several examples, including the deformed Lagrangian of (1,1) supersymmetry. We show that this Lagrangian is off-shell invariant through order λ² in the deformation parameter and verify its SUSY algebra through order λ. Authors: Evan Austen Coleman (Stanford University, Physics Department, United States); Jeremías Aguilera Damia (Comisión Nacional de Energía Atómica, Centro Atómico Bariloche, and CONICET, Argentina); Daniel Z. Freedman (Stanford University, Physics Department, and Massachusetts Institute of Technology, United States); Ronak M. Soni (Stanford University, Physics Department, United States).
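
    For orientation only, a standard bosonic example rather than this paper's supersymmetric result, and in a convention where the flow reads ∂_λ L = det T_{μν} (normalizations of λ differ between references): the TT¯-deformed Lagrangian of a single free massless scalar resums into a square-root form,

        \mathcal{L}_\lambda = \frac{1}{2\lambda}\left(\sqrt{1 + 2\lambda\,\partial_\mu\phi\,\partial^\mu\phi} - 1\right),
        \qquad
        \lim_{\lambda\to 0}\mathcal{L}_\lambda = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi .

    Expanding the square root in powers of λ yields the order-by-order corrections that an algebraic method of the kind described above must reproduce; the order-λ and order-λ² checks quoted in the abstract refer to the analogous expansion of the supersymmetric Lagrangian in the deformation parameter.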

    In situ geochronology as a mission-enabling technology

    Although there are excellent estimates of the ages of terrains on Mars from crater counting, even a few absolute ages would serve to validate the calibration. Results with uncertainties much larger than those achievable in laboratories on Earth would still be extremely valuable. While there are other possibilities for in situ geochronology instruments, we describe here two alternative technologies being developed at JPL. The two share common features. The first is analysis by means of a miniature mass spectrometer. The second is the use of laser sampling to reduce or avoid sample handling, preparation, and pre-treatment and, equally importantly, to allow analysis of individual, texturally resolved minerals in coarse-grained rocks. This textural resolution will aid in the selection of grains more or less enriched in the relevant elements and allow construction of isochrons for more precise dating. Either of these instruments could enable missions to Mars and other planetary bodies.

    Multi-Element Abundance Measurements from Medium-Resolution Spectra. IV. Alpha Element Distributions in Milky Way Dwarf Satellite Galaxies

    We derive the star formation histories of eight dwarf spheroidal (dSph) Milky Way satellite galaxies from their alpha element abundance patterns. Nearly 3000 stars from our previously published catalog (Paper II) comprise our data set. The average [alpha/Fe] ratios for all dSphs follow roughly the same path with increasing [Fe/H]. We do not observe the predicted knees in the [alpha/Fe] vs. [Fe/H] diagram, corresponding to the metallicity at which Type Ia supernovae begin to explode. Instead, we find that Type Ia supernova ejecta contribute to the abundances of all but the most metal-poor ([Fe/H] < -2.5) stars. We have also developed a chemical evolution model that tracks the star formation rate, Type II and Type Ia supernova explosions, and supernova feedback. Without metal enhancement in the supernova blowout, massive gas loss defines the history of all dSphs except Fornax, the most luminous in our sample. All six of the best-fit model parameters correlate with dSph luminosity but not with velocity dispersion, half-light radius, or Galactocentric distance. Accepted for publication in ApJ.

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state, and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate, and the capabilities it will possess.